25 research outputs found

    Semantic Protection and Personalization of Video Content. PIAF: An MPEG-Compliant Multimedia Adaptation Framework for Preserving User-Perceived Quality.

    Universal Multimedia Experience (UME) is the notion that a user should receive informative adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. To deal with these semantic constraints, a fine-grained adaptation, which can go down to the level of video objects, is necessary. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the level of the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by the user, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE), which can be considered the "brain" of the adaptation engine, is responsible for achieving this goal. The task of the ADTE is challenging, as many adaptation operations can satisfy the same semantic constraint, thus giving rise to several feasible adaptation plans. Indeed, for each entity undergoing the adaptation process, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his/her quality of experience. The first challenge is to objectively measure the quality of the adapted video, taking into consideration the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible plans. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints imposed by the owner's intellectual property rights on the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF), which integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE that performs multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included intellectual property rights in the adaptation process by modeling content owner constraints, and we dealt with the problem of conflicting user and owner constraints by mapping it to a known optimization problem. Moreover, we developed the Semantic Video Content Annotation Tool (SVCAT), which produces structural and high-level semantic annotations according to an original object-based video content model. We also modeled the user's preferences by proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF, a complete, modular, MPEG-standard-compliant framework that covers the whole process of semantic video adaptation. We validated this research with qualitative and quantitative evaluations, which assess the performance and efficiency of the proposed adaptation decision-taking engine within PIAF. The experimental results show that the proposed UF has a high correlation with subjective video quality evaluation.

    The term "Universal Multimedia Experience" (UME) describes the vision that a user can consume video content tailored to his or her individual preferences. In this dissertation, UME is extended to also take semantic constraints into account, which are directly tied to the consumption of the video content. The goal is to maximize the quality of the video experience for the user. In this dissertation, that quality is represented by the user's satisfaction when perceiving the modifications made to the videos. The modifications are produced by video adaptation, e.g., by deleting or altering scenes or objects that do not comply with a semantic constraint. At the core of the video adaptation is the Adaptation Decision Taking Engine (ADTE). It determines the operators that resolve the semantic constraints and then computes candidate adaptation plans to be applied to the video. Furthermore, for each adaptation step the ADTE must determine, based on the operators, how the user's preferences can be taken into account. The second challenge is assessing and maximizing the quality of an adapted video. The third challenge is handling conflicting semantic constraints, in particular those related to copyright. In this dissertation, the above challenges are addressed by means of the Personalized vIdeo Adaptation Framework (PIAF), which is based on the Moving Picture Experts Group (MPEG) standards MPEG-7 and MPEG-21. PIAF is a framework that covers the entire video adaptation process. It models the relationships among the adaptation operators, the users' preferences, and the quality of the videos. Furthermore, the problem of optimally selecting an adaptation plan that maximizes video quality is investigated. To this end, a Utility Function (UF) is defined and employed in the ADTE, combining the semantic constraints with the preferences expressed by the user. In addition, the Semantic Video Content Annotation Tool (SVCAT) has been developed to produce structural and semantic annotations, and user preferences have been captured with MPEG-7 and MPEG-21 descriptors. The development of these software tools and algorithms is necessary to obtain a complete and modular framework, so that PIAF covers the entire scope of semantic video adaptation. The ADTE has been validated in qualitative and quantitative evaluations. Among other results, the evaluation shows that the UF correlates strongly with the subjective quality perception of selected users.
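
    The thesis record does not include source code, so the following Python sketch is only a schematic illustration of the selection step described above: for each entity affected by a semantic constraint, score every candidate adaptation operator with a utility that trades off perceptual quality, semantic preservation, and cost, and keep the best one. The operator names, weights, and score values are invented for the example.

```python
# Illustrative sketch of utility-driven operator selection (not PIAF's actual code).
# Operator names, weights, and the score dictionaries are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A video object or shot that violates some semantic constraint."""
    entity_id: str
    # Per-operator estimates, each in [0, 1]: (perceptual quality, semantic preservation, cost)
    scores: dict = field(default_factory=dict)

def utility(scores, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of perceptual quality, semantic preservation, and cost."""
    w_perc, w_sem, w_cost = weights
    perceptual, semantic, cost = scores
    return w_perc * perceptual + w_sem * semantic - w_cost * cost

def decide_plan(entities):
    """For each entity, pick the adaptation operator with the highest utility."""
    plan = {}
    for entity in entities:
        plan[entity.entity_id] = max(entity.scores, key=lambda op: utility(entity.scores[op]))
    return plan

if __name__ == "__main__":
    ball = Entity("object:ball", scores={
        "drop_object":    (0.40, 0.90, 0.10),
        "blur_object":    (0.70, 0.80, 0.30),
        "replace_object": (0.85, 0.60, 0.60),
    })
    print(decide_plan([ball]))   # {'object:ball': 'blur_object'} with these toy scores
```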

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    A Multi-level Access Control Scheme for Multimedia Database

    Security of multimedia database systems is becoming a critical problem, especially with the proliferation of multimedia data and applications. One of the most challenging issues is to provide content-based multimedia database access control that efficiently handles different users' access, with possible fine-grained restrictions at a specific level of the multimedia data. However, the realization of such a model depends on other related research issues: (a) efficient multimedia data analysis for supporting semantic visual concept representation; (b) practical representation of the multimedia database; (c) effective multimedia database indexing structures for content-based retrieval; and (d) development of a suitable access control model.
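
    As a rough illustration of the kind of fine-grained, multi-level check the abstract calls for (the clearance levels, label ordering, and granule identifiers below are assumptions, not the paper's model), a minimal sketch might look like this:

```python
# Minimal sketch of level-based access checks on multimedia granules.
# The level ordering and the sample labels are assumptions, not the paper's model.

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_access(user_clearance: str, granule_sensitivity: str) -> bool:
    """Allow access when the user's clearance dominates the granule's label."""
    return LEVELS[user_clearance] >= LEVELS[granule_sensitivity]

# Sensitivity can be attached at any granularity: whole video, scene, shot, or object.
video_labels = {
    "video:v42": "internal",
    "video:v42/scene:3": "confidential",
    "video:v42/scene:3/object:face_17": "secret",
}

def filter_granules(user_clearance: str, labels: dict) -> list:
    """Return the granule identifiers the user may retrieve."""
    return [g for g, lvl in labels.items() if can_access(user_clearance, lvl)]

print(filter_granules("confidential", video_labels))
# ['video:v42', 'video:v42/scene:3']  -- the 'secret' object is filtered out
```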

    Semantic Protection and Personalization of Video Content. PIAF: An MPEG-Compliant Adaptation Framework Preserving the User's Perceived Quality

    In this thesis, we propose an adaptation framework called the Personalized vIdeo Adaptation Framework (PIAF), built on the MPEG standards. PIAF integrates semantic constraints and aims to maximize the quality perceived by the user when viewing the video, while respecting intellectual property rights. The contributions of this thesis can be summarized as follows. First, we used and extended the MPEG-7 and MPEG-21 standards to represent user preferences. We then proposed a formal model of the semantic video adaptation process and defined a utility function governing the decision-making mechanism of the MPDA. This function takes into account different quality dimensions (perceptual quality, semantic quality, required execution time) in order to quantitatively evaluate the quality of an adaptation plan. The adaptation process we propose integrates intellectual property rights into the decision process. In some cases, the adaptation plan that would produce the highest-quality video adapted to the user's preferences may be inapplicable because it does not respect the owner's constraints. Finding the best adaptation plan then becomes an NP-complete problem. We proposed a practical solution to this problem in the form of a heuristic capable of selecting a plan very close to the optimum within a reasonable computation time. To implement this framework, we also developed a semantic video content annotation tool (SVCAT) that produces structural and high-level semantic annotations according to an object-based video content model. We validated our work with qualitative and quantitative evaluations that allowed us to study the performance and efficiency of the MPDA. The results show that the proposed utility function correlates strongly with users' subjective evaluations of the quality of an adapted video, and therefore constitutes a highly relevant basis for the MPDA.

    Universal Multimedia Experience (UME) is the notion that a user should receive informative adapted content anytime and anywhere. Personalization of videos, which adapts their content according to user preferences, is a vital aspect of achieving the UME vision. User preferences can be translated into several types of constraints that must be considered by the adaptation process, including semantic constraints directly related to the content of the video. The overall goal of this adaptation process is to provide users with adapted content that maximizes their Quality of Experience (QoE). This QoE depends at the same time on the level of the user's satisfaction in perceiving the adapted content, the amount of knowledge assimilated by them, and the adaptation execution time. In video adaptation frameworks, the Adaptation Decision Taking Engine (ADTE) is responsible for achieving this goal. The task of the ADTE is challenging, as many adaptation operations can satisfy the same semantic constraint, thus giving rise to several feasible adaptation plans. Indeed, for each entity to be adapted, the ADTE must decide on the adequate adaptation operator that satisfies the user's preferences while maximizing his/her quality of experience. The first challenge is to objectively measure the quality of the adapted video while considering the multiple aspects of the QoE. The second challenge is to assess this quality beforehand in order to choose the most appropriate adaptation plan among all possible ones. The third challenge is to resolve conflicting or overlapping semantic constraints, in particular conflicts arising from constraints imposed by the owner's intellectual property rights (IPR) on the modification of the content. In this thesis, we tackled the aforementioned challenges by proposing a Utility Function (UF), which integrates semantic concerns with the user's perceptual considerations. This UF models the relationships among adaptation operations, user preferences, and the quality of the video content. We integrated this UF into an ADTE that performs multi-level piecewise reasoning to choose the adaptation plan that maximizes the user-perceived quality. Furthermore, we included IPR in the adaptation process: we modeled content owner constraints and proposed a heuristic to resolve conflicting user and owner constraints. Moreover, we developed SVCAT, which produces structural and high-level semantic annotations according to an original object-based video content model. We also modeled the user's preferences by proposing extensions to MPEG-7 and MPEG-21. All the developed contributions were carried out as part of a coherent framework called PIAF. We validated this research with qualitative and quantitative evaluations, which assess the performance and efficiency of the proposed adaptation decision-taking engine within PIAF.
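
    The abstract notes that, once owner constraints are added, finding the best plan becomes NP-complete and is handled by a heuristic. The thesis's actual heuristic is not reproduced here; the sketch below only shows the simplest greedy variant for the special case where entities can be decided independently, which is enough to illustrate how owner restrictions prune the user's preferred options.

```python
# Hedged sketch of one possible greedy reconciliation of user-preferred adaptations
# with owner restrictions. This is NOT the thesis's heuristic; the data structures
# and the fallback rule below are assumptions for illustration only.

def select_plan(candidates, owner_forbidden, utility):
    """candidates: {entity_id: [operator, ...]}
    owner_forbidden: {entity_id: set of operators the owner disallows}
    utility: callable (entity_id, operator) -> float
    Returns {entity_id: chosen operator, or None if every option is disallowed}."""
    plan = {}
    for entity_id, operators in candidates.items():
        allowed = [op for op in operators if op not in owner_forbidden.get(entity_id, set())]
        plan[entity_id] = max(allowed, key=lambda op: utility(entity_id, op)) if allowed else None
    return plan

# Toy usage: the user would prefer 'drop_object', but the owner forbids removal.
scores = {("object:logo", "drop_object"): 0.9,
          ("object:logo", "blur_object"): 0.6}
plan = select_plan(
    candidates={"object:logo": ["drop_object", "blur_object"]},
    owner_forbidden={"object:logo": {"drop_object"}},
    utility=lambda e, op: scores[(e, op)],
)
print(plan)  # {'object:logo': 'blur_object'}
```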

    Utility Function for Semantic Video Content Adaptation

    The vision of Universal Multimedia Access (UMA) and Universal Multimedia Experience (UME) has driven research in the multimedia community for a long time. Implementing content adaptation frameworks that satisfy heterogeneous types of constraints is among the main requirements for UMA and UME. At the core of these frameworks lies the adaptation decision engine, which computes the appropriate adaptation plans. Though much research has already been done in this domain, the problem of semantic constraints has generally been neglected. This paper addresses the problem of selecting the optimal adaptation operation to satisfy semantic constraints while maximizing the utility of the adapted video. To this end, we define a utility function that computes a value for each possible adaptation operation. We represent our utility function using the MPEG-21 Digital Item Adaptation (DIA) tools. This facilitates the integration of semantic constraints with other types of constraints in the MPEG-21 Universal Constraints Description (UCD) format.
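
    The paper defines a utility value for every candidate adaptation operation, but the abstract does not reproduce the function itself. Purely as an illustration of the kind of aggregation such a utility function could perform (the weights and quality terms below are assumptions, not the paper's definition), one might write:

```latex
% Illustrative only -- not the utility function defined in the paper.
U(o, e) = w_{p}\, Q_{p}(o, e) + w_{s}\, Q_{s}(o, e) - w_{t}\, T(o, e),
\qquad w_{p} + w_{s} + w_{t} = 1
```

    where Q_p is the predicted perceptual quality, Q_s the degree of semantic preservation, and T the normalized execution time of applying operation o to entity e; a decision engine would then select argmax_o U(o, e) among the operations that satisfy the declared constraints.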

    Distributed Key Management in Dynamic Outsourced Databases : a Trie-Based Approach

    The decision to outsource databases is strategic in many organizations due to the increasing costs of internally managing large volumes of information. The sensitive nature of this information raises the need for powerful mechanisms to protect it against unauthorized disclosure. Centralized encryption for access control at the data owner level has been proposed as one way of handling this issue. However, its prohibitive cost renders it impractical and inflexible. A distributed cryptographic approach has been suggested as a promising alternative, where keys are distributed to users on the basis of their assigned privileges. But in this case, key management becomes problematic in the face of frequent database updates and remains an open issue. In this paper, we present a novel approach based on Binary Tries. By exploiting the intrinsic properties of these data structures, key management complexity, and thus its cost, is significantly reduced. Changes to the Binary Trie structure remain limited in the face of frequent updates. Preliminary experimental analysis demonstrates the validity and effectiveness of our approach.
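
    The abstract does not detail the key-assignment scheme, so the sketch below only illustrates the general idea behind trie-based key management: keys are derived along the bits of a binary-trie path, so handing a user one internal-node key implicitly grants the keys of the whole subtree. The hash-chaining construction shown here is an assumption for illustration, not the paper's exact scheme.

```python
# Hedged sketch of path-based key derivation over a binary trie. Holding the key
# of an internal node lets one derive keys for its whole subtree, so only a few
# keys need to be distributed. The paper's exact scheme (and how it handles
# updates) is not reproduced here.
import hashlib

def child_key(parent_key: bytes, bit: str) -> bytes:
    """Derive a child key from its parent key and the branch bit ('0' or '1')."""
    return hashlib.sha256(parent_key + bit.encode()).digest()

def derive_key(root_key: bytes, path: str) -> bytes:
    """Walk the binary trie from the root along `path` (e.g. '0110')."""
    key = root_key
    for bit in path:
        key = child_key(key, bit)
    return key

root = b"owner-master-secret"                  # held by the data owner only
user_key = derive_key(root, "01")              # given to a user with rights on subtree '01'
record_key = derive_key(user_key, "10")        # user derives the key for leaf '0110' locally
assert record_key == derive_key(root, "0110")  # matches the owner-side derivation
```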

    Personalized vIdeo Adaptation Framework (PIAF): High-Level Semantics Adaptation

    Despite much work on Universal Multimedia Experience (UME), existing video adaptation approaches cannot yet be considered truly user-centric, mostly due to their poor handling of semantic user preferences. Indeed, these works mainly concentrate on lower-level user preferences and neither consider fine-grained object-level adaptation nor evaluate different adaptation options based on predicted user expectations. Moreover, these works do not allow content owners to place restrictions, derived from their property rights, on the types of modifications that may be made to the video content. To address these shortcomings, we propose the Personalized vIdeo Adaptation Framework (PIAF) for high-level semantic video adaptation. PIAF is a fully integrated framework providing all the requirements for semantic video adaptation. It defines a video annotation model and a user profile model comprising semantic constraints that are delineated in a consistent way, based on the MPEG-7 and MPEG-21 standards. At the heart of the framework, the Adaptation Decision Taking Engine (ADTE) computes utility values for the different adaptation options, considering each shot separately. The corresponding utility function evaluates the possible choices against multiple parameters that capture different dimensions of a multimedia experience: the amount of modified content, modifications to key objects and shots with respect to the semantic integrity of the original content, the expected processing cost of the adaptation, and the anticipated visual and temporal quality of the adapted content. Furthermore, the ADTE can deal with intellectual property issues by selecting an adaptation plan of good quality that also satisfies the constraints specified by the content owner. This paper places significant emphasis on the theoretical details of the utility function and the computation of the adaptation plan. It also presents the results and evaluation of the adaptation process, both in simulation and in a user study.
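
    The abstract names the per-shot quality dimensions the utility function takes into account without giving its exact form. The following sketch only illustrates how such per-shot dimensions might be combined and aggregated into a plan-level score; the field names, weights, and sample values are assumptions, not PIAF's actual function.

```python
# Illustrative sketch only: scoring whole adaptation plans shot by shot, using the
# quality dimensions named in the abstract. Weights and field names are assumptions.
from dataclasses import dataclass

@dataclass
class ShotAdaptation:
    modified_fraction: float   # share of the shot's content that gets altered, in [0, 1]
    key_object_impact: float   # damage to key objects / semantic integrity, in [0, 1]
    processing_cost: float     # expected processing cost, normalized to [0, 1]
    visual_quality: float      # anticipated visual and temporal quality, in [0, 1]

def shot_utility(s: ShotAdaptation, w=(0.2, 0.4, 0.1, 0.3)) -> float:
    """Higher is better: reward quality, penalize modification, semantic impact, and cost."""
    return (w[3] * s.visual_quality
            - w[0] * s.modified_fraction
            - w[1] * s.key_object_impact
            - w[2] * s.processing_cost)

def plan_utility(shots) -> float:
    """Aggregate per-shot utilities; a decision engine would keep the highest-scoring plan."""
    return sum(shot_utility(s) for s in shots) / len(shots)

plan_a = [ShotAdaptation(0.1, 0.0, 0.2, 0.9), ShotAdaptation(0.5, 0.3, 0.4, 0.6)]
plan_b = [ShotAdaptation(0.8, 0.6, 0.2, 0.7), ShotAdaptation(0.2, 0.1, 0.1, 0.8)]
best = max([plan_a, plan_b], key=plan_utility)
print(plan_utility(plan_a), plan_utility(plan_b))  # plan_a scores higher in this toy example
```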

    Semantic Video Content Annotation at the Object Level

    A vital prerequisite for fine-grained video content processing (indexing, querying, retrieval, adaptation, etc.) is the production of accurate metadata describing its structure and semantics. Several annotation tools have been presented in the literature, generating metadata at different granularities (i.e., scenes, shots, frames, objects). These tools have a number of limitations with respect to the annotation of objects. Though they provide functionalities to localize and annotate an object in a frame, the propagation of this information to subsequent frames still requires human intervention. Furthermore, they are based on video models that lack expressiveness along the spatial and semantic dimensions. To address these shortcomings, we propose the Semantic Video Content Annotation Tool (SVCAT) for structural and high-level semantic annotation. SVCAT is a semi-automatic annotation tool, compliant with the MPEG-7 standard, which produces metadata according to the object-based video content model described in this paper. In particular, the novelty of SVCAT lies in its automatic propagation of object localization and description metadata, realized by tracking object contours through the video, thus drastically alleviating the task of the annotator. Experimental results show that SVCAT provides accurate metadata to object-based applications, in particular exact contours of multiple deformable objects.
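
    SVCAT's specific contour-tracking algorithm is not described in the abstract, so the snippet below is only a generic sketch of how object localization metadata can be propagated across frames, here using sparse optical flow on contour points (OpenCV assumed); it is not SVCAT's method.

```python
# Generic sketch of propagating an object's contour points across frames with
# sparse optical flow (OpenCV). This is NOT SVCAT's contour-tracking algorithm,
# just an illustration of metadata propagation without per-frame manual annotation.
import cv2
import numpy as np

def propagate_contour(frames, initial_contour):
    """frames: list of BGR images; initial_contour: (N, 2) float32 array of points
    annotated on frames[0]. Returns one contour estimate per frame."""
    contours = [initial_contour]
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = initial_contour.reshape(-1, 1, 2).astype(np.float32)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = next_pts[status.flatten() == 1].reshape(-1, 1, 2)   # keep successfully tracked points
        contours.append(pts.reshape(-1, 2))
        prev_gray = gray
    return contours
```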
